The immense diversity of end-user tasks, ranging from manufacturing environments to personal homes, makes pre-programming robots extremely challenging. Indeed, teaching a robot new actions from scratch that can be reused for previously unseen tasks remains a difficult challenge and is generally left to robotics experts. In this work we present iRoPro, an interactive Robot Programming framework that allows end-users with no technical background to teach a robot new reusable actions. We combine Programming by Demonstration and Automated Planning techniques to allow users to build the robot's knowledge base by teaching new actions via kinesthetic demonstration. These actions are generalised and reused with a task planner to solve previously unseen user-defined problems. We implement iRoPro as an end-to-end system on a Baxter Research Robot, teaching both low-level and high-level manipulation actions by demonstration, which users can customise via a graphical user interface to adapt to their specific use cases. To evaluate the feasibility of our approach, we first conducted pre-design experiments to better understand user adoption of the concepts involved and of the proposed robot programming process. We compare the results with post-design experiments, where we conducted a user study to validate the usability of our approach with real end-users. Overall, we show that users with varying programming levels and educational backgrounds can easily learn and use iRoPro and its robot programming process.
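As a concrete illustration of the Programming by Demonstration idea, the following Python sketch derives a STRIPS-like operator by diffing the symbolic world state observed before and after a single kinesthetic demonstration. All names here (infer_operator, the predicate strings) are illustrative assumptions, not iRoPro's actual API.

```python
def infer_operator(name, state_before, state_after):
    """Derive preconditions and effects by diffing two symbolic states."""
    return {
        "name": name,
        "pre": set(state_before),           # facts required to apply the action
        "add": state_after - state_before,  # facts the action makes true
        "del": state_before - state_after,  # facts the action makes false
    }

# Example: the user physically guides the arm through picking up a cube.
before = {("clear", "cube"), ("on_table", "cube"), ("hand_empty",)}
after = {("clear", "cube"), ("holding", "cube")}

pick_up = infer_operator("pick_up", before, after)
print(pick_up["add"])   # {('holding', 'cube')}
print(pick_up["del"])   # {('on_table', 'cube'), ('hand_empty',)}
```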
Hand-encoding PDDL domains is generally considered difficult, tedious, and error-prone. The difficulty is even greater when temporal domains have to be encoded, since actions have durations and their effects are not instantaneous. In this paper, we present TempAMLSI, an algorithm based on the AMLSI approach that is able to learn temporal domains. TempAMLSI relies on the classical assumption made in temporal planning that a non-temporal domain can be converted into a temporal domain. TempAMLSI is the first approach able to learn temporal domains with both Single Hard Envelopes (SHE) and Cushing intervals. We show experimentally that TempAMLSI learns accurate temporal domains, i.e., temporal domains that can be used directly to solve new planning problems, for different forms of action concurrency.
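The conversion assumption can be illustrated with a toy sketch: the Python below compiles an instantaneous STRIPS-style action into a durative action with at-start/at-end conditions and effects. The exact placement of conditions and effects differs between compilation schemes (an SHE action keeps its conditions over the whole envelope), and none of this is TempAMLSI's actual code.

```python
def to_durative(action, duration):
    """Compile an instantaneous action into a durative one by splitting it
    into at-start and at-end events (one common compilation scheme)."""
    return {
        "name": action["name"],
        "duration": duration,
        "at_start_cond": action["pre"],
        "over_all_cond": set(),  # invariants that must hold during execution
        "at_start_eff": {("busy", action["name"])},  # illustrative mutex token
        "at_end_eff": action["add"] | {("not",) + e for e in action["del"]},
    }

move = {"name": "move", "pre": {("at", "r1", "a")},
        "add": {("at", "r1", "b")}, "del": {("at", "r1", "a")}}
print(to_durative(move, duration=5.0))
```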
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human operator to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Backdoor attacks represent one of the major threats to machine learning models. Various efforts have been made to mitigate backdoors. However, existing defenses have become increasingly complex and often require high computational resources or may also jeopardize models' utility. In this work, we show that fine-tuning, one of the most common and easy-to-adopt machine learning training operations, can effectively remove backdoors from machine learning models while maintaining high model utility. Extensive experiments over three machine learning paradigms show that fine-tuning and our newly proposed super-fine-tuning achieve strong defense performance. Furthermore, we coin a new term, namely backdoor sequela, to measure the changes in model vulnerabilities to other attacks before and after the backdoor has been removed. Empirical evaluation shows that, compared to other defense methods, super-fine-tuning leaves limited backdoor sequela. We hope our results can help machine learning model owners better protect their models from backdoor threats. Also, it calls for the design of more advanced attacks in order to comprehensively assess machine learning models' backdoor vulnerabilities.
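A hedged sketch of the basic defense described above: fine-tune the possibly backdoored model on a small clean dataset with a standard PyTorch loop. The hyperparameters are illustrative, and the paper's super-fine-tuning uses a specific dynamic learning-rate schedule that is not reproduced here.

```python
import torch
import torch.nn as nn

def fine_tune(model, clean_loader, epochs=10, lr=0.01, device="cpu"):
    """Continue training a trained (possibly backdoored) model on clean
    data only; the backdoor association is gradually overwritten while
    the clean-task weights are largely preserved."""
    model.train().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```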
In a fissile material, the inherent multiplicity of neutrons born through induced fissions leads to correlations in their detection statistics. The correlations between neutrons can be used to trace back some characteristics of the fissile material. This technique known as neutron noise analysis has applications in nuclear safeguards or waste identification. It provides a non-destructive examination method for an unknown fissile material. This is an example of an inverse problem where the cause is inferred from observations of the consequences. However, neutron correlation measurements are often noisy because of the stochastic nature of the underlying processes. This makes the resolution of the inverse problem more complex since the measurements are strongly dependent on the material characteristics. A minor change in the material properties can lead to very different outputs. Such an inverse problem is said to be ill-posed. For an ill-posed inverse problem the inverse uncertainty quantification is crucial. Indeed, seemingly low noise in the data can lead to strong uncertainties in the estimation of the material properties. Moreover, the analytical framework commonly used to describe neutron correlations relies on strong physical assumptions and is thus inherently biased. This paper addresses dual goals. Firstly, surrogate models are used to improve neutron correlations predictions and quantify the errors on those predictions. Then, the inverse uncertainty quantification is performed to include the impact of measurement error alongside the residual model bias.
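The two-step approach can be sketched on a toy one-dimensional example: fit a Gaussian-process surrogate to an expensive forward model of neutron correlations, then invert a noisy measurement on a parameter grid while propagating both surrogate and measurement uncertainty. The forward model and all numbers below are hypothetical stand-ins, not the paper's physics.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical forward model: material parameter theta -> correlation observable.
def forward(theta):
    return np.sin(3 * theta) + 0.5 * theta

# (1) Train a GP surrogate on a handful of "expensive" simulations.
thetas = np.linspace(0, 2, 20).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
gp.fit(thetas, forward(thetas).ravel())

# (2) Crude Bayesian inversion: weight a parameter grid by the Gaussian
# likelihood of a noisy measurement under the surrogate's prediction,
# combining surrogate error and measurement noise in the variance.
y_obs = forward(np.array([[1.2]])) + rng.normal(0, 0.05)
grid = np.linspace(0, 2, 400).reshape(-1, 1)
mu, sigma = gp.predict(grid, return_std=True)
total_var = sigma**2 + 0.05**2
posterior = np.exp(-0.5 * (y_obs.ravel() - mu)**2 / total_var) / np.sqrt(total_var)
posterior /= posterior.sum()
print("posterior mean of theta:", float((grid.ravel() * posterior).sum()))
```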
The Hierarchical Task Network (HTN) formalism expresses a wide variety of planning problems as the decomposition of tasks into subtasks. Many techniques have been proposed to solve such hierarchical planning problems. One particular technique is to encode hierarchical planning problems as classical STRIPS planning problems. An advantage of this technique is that it benefits directly from the constant improvements made to STRIPS planners. However, there are still few encodings that are both efficient and expressive. In this paper, we present a new HTN-to-STRIPS encoding that allows parallel plans. We show experimentally that this encoding outperforms previous approaches on hierarchical IPC benchmarks.
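The general idea behind such encodings can be sketched as follows: each HTN method becomes a classical operator that rewrites a pending compound task into its subtasks, so that an off-the-shelf STRIPS planner searches over decompositions. This toy Python version omits the ordering constraints between subtasks that a real encoding, in particular a parallel-plan one, must track, and it is not the paper's encoding.

```python
def method_to_strips(method):
    """Turn an HTN method into a STRIPS-style operator over 'pending task'
    facts. Ordering constraints among subtasks are deliberately omitted."""
    return {
        "name": f"decompose-{method['task']}-{method['name']}",
        "pre": {("pending", method["task"])} | method["pre"],
        "add": {("pending", sub) for sub in method["subtasks"]},
        "del": {("pending", method["task"])},
    }

deliver = {"name": "m-deliver", "task": "deliver-pkg",
           "pre": {("at", "truck", "depot")},
           "subtasks": ["load-pkg", "drive-to-dest", "unload-pkg"]}
print(method_to_strips(deliver))
```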
The Hierarchical Task Network (HTN) formalism is very expressive and is used to represent a wide variety of planning problems. In contrast to the classical STRIPS formalism, in which only the action model needs to be specified, the HTN formalism requires specifying the tasks of the problem and their decomposition into subtasks, called HTN methods. Hand-encoding HTN problems is therefore considered by experts to be more difficult and more error-prone than encoding classical planning problems. To address this problem, we propose a new approach based on grammar induction, HierAMLSI, to acquire HTN planning domain knowledge by learning both the action model and the HTN methods with their preconditions. Unlike other approaches, HierAMLSI is able to learn both actions and methods from noisy and partial input observations with a high level of accuracy.
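One classic ingredient of such learning can be sketched in a few lines: estimate a method's preconditions as the facts that held in every observed state in which the method was applied. This is only an illustration of the flavour of the problem, not HierAMLSI's grammar-induction algorithm, and the predicates are hypothetical.

```python
from functools import reduce

def learn_preconditions(observed_states):
    """Intersect the (possibly partial) states that preceded each
    observed application of a method: only facts common to all of them
    can be preconditions."""
    return reduce(set.intersection, observed_states)

traces = [
    {("at", "truck", "depot"), ("loaded", "pkg"), ("fuel-ok",)},
    {("at", "truck", "depot"), ("fuel-ok",), ("driver-in", "truck")},
]
print(learn_preconditions(traces))  # {('at', 'truck', 'depot'), ('fuel-ok',)}
```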
Image registration is a key task in medical imaging applications, allowing medical images to be represented in a common spatial reference frame. Current literature on image registration is generally based on the assumption that the researcher has access to the images, from which the spatial transformation is subsequently estimated. This common assumption may not be met in real-world applications, since the sensitive nature of medical images may ultimately require that they be analysed under privacy constraints, preventing the sharing of image content in clear form. In this work, we formulate the problem of image registration under a privacy-preserving regime, where images are assumed to be confidential and cannot be disclosed in clear. We derive our privacy-preserving image registration framework by extending classical registration paradigms to account for advanced cryptographic tools, such as secure multi-party computation and homomorphic encryption, which enable operations to be performed without leaking the underlying data. To overcome the performance and scalability problems of cryptographic tools in high dimensions, we first propose to optimise the underlying image registration operations using gradient approximations. We further revisit the use of homomorphic encryption and use a packing method to allow more efficient encryption and multiplication of large matrices. We demonstrate our privacy-preserving framework on linear and non-linear registration problems and evaluate its accuracy and scalability with respect to standard image registration. Our results show that privacy-preserving image registration is feasible and can be adopted in sensitive medical imaging applications.
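The gradient-approximation step can be illustrated in plain NumPy on a one-dimensional toy registration: minimise a mean-squared-difference loss over a translation parameter using finite differences rather than analytic gradients. The cryptographic layer (MPC or homomorphic encryption) is omitted here; under encryption, each loss evaluation below would be computed securely.

```python
import numpy as np

def shift(signal, t):
    """Shift a 1-D signal by t samples using linear interpolation."""
    x = np.arange(signal.size)
    return np.interp(x - t, x, signal)

def msd(fixed, moving, t):
    """Mean-squared difference after shifting 'moving' by t samples."""
    return float(((fixed - shift(moving, t)) ** 2).mean())

def register(fixed, moving, steps=200, lr=100.0, eps=0.5):
    """Gradient descent on the translation using finite differences,
    i.e. only loss evaluations -- the operation that would be carried
    out securely in the privacy-preserving setting."""
    t = 0.0
    for _ in range(steps):
        g = (msd(fixed, moving, t + eps) - msd(fixed, moving, t - eps)) / (2 * eps)
        t -= lr * g
    return t

x = np.linspace(0, 4 * np.pi, 256)
fixed = np.sin(x)
moving = shift(fixed, -3.0)     # 'moving' is 'fixed' shifted by 3 samples
print(register(fixed, moving))  # converges to roughly 3.0
```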
Machine unlearning is the process of removing the influence of particular training data from a machine learning (ML) model upon a removal request. While straightforward and legitimate, retraining the ML model from scratch incurs a high computational overhead. To address this issue, a number of approximate algorithms have been proposed in the domains of image and text data, among which SISA is the state-of-the-art solution. It randomly partitions the training set into multiple shards and trains a constituent model for each shard. However, directly applying SISA to graph data can severely damage the graph's structural information and thereby harm the utility of the resulting ML model. In this paper, we propose GraphEraser, a novel machine unlearning framework tailored to graph data. Its contributions include two novel graph partition algorithms and a learning-based aggregation method. We conduct extensive experiments on five real-world graph datasets to illustrate the unlearning efficiency and model utility of GraphEraser. It achieves unlearning-time improvements of 2.06x (small dataset) to 35.94x (large dataset). Moreover, GraphEraser achieves up to 62.5% higher F1 score, and our proposed learning-based aggregation method achieves up to 112% higher F1 score. Our code is available at github.com/minchen00/graph-unlearning.
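For context, here is a compact sketch of the SISA-style sharding that GraphEraser builds on: random partition, one constituent model per shard, unlearning by retraining only the affected shard, and majority-vote aggregation. GraphEraser's own contributions, graph-aware partitioning and learned aggregation weights, are not shown, and the classifier choice below is arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedModel:
    def __init__(self, n_shards, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)
        self.models, self.shards = [], []

    def fit(self, X, y):
        # Random partition of the training set into disjoint shards.
        idx = self.rng.permutation(len(X))
        self.shards = np.array_split(idx, self.n_shards)
        self.models = [LogisticRegression(max_iter=1000).fit(X[s], y[s])
                       for s in self.shards]

    def unlearn(self, X, y, sample):
        # Only the shard containing 'sample' is retrained from scratch.
        for i, s in enumerate(self.shards):
            if sample in s:
                self.shards[i] = s[s != sample]
                self.models[i] = LogisticRegression(max_iter=1000).fit(
                    X[self.shards[i]], y[self.shards[i]])

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        # Majority vote; GraphEraser learns shard weights instead.
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```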
Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that extraction of information about the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks made many assumptions about the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model's training data. We relax all these key assumptions, thereby showing that such attacks are very broadly applicable at low cost and consequently pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets that show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
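The simplest of the relaxed attacks can be sketched in a few lines: with no shadow models and no knowledge of the target architecture, the adversary thresholds the target model's prediction confidence, since training members tend to receive more confident posteriors. The threshold and toy scores below are illustrative.

```python
import numpy as np

def confidence_attack(posteriors, threshold=0.9):
    """posteriors: (n_samples, n_classes) outputs of the target model.
    Returns True where a sample is predicted to be a training member."""
    return posteriors.max(axis=1) >= threshold

# Toy illustration with hypothetical model outputs.
member_scores = np.array([[0.97, 0.02, 0.01], [0.99, 0.005, 0.005]])
nonmember_scores = np.array([[0.60, 0.30, 0.10], [0.45, 0.35, 0.20]])
print(confidence_attack(member_scores))     # [ True  True]
print(confidence_attack(nonmember_scores))  # [False False]
```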